The tracking-by-detection paradigm has become the dominant approach to multi-object tracking: objects are detected in each frame and then associated across frames. However, its sequential frame-wise matching fundamentally suffers from intermediate interruptions in a video, such as object occlusions, fast camera movements, and abrupt light changes. Moreover, it typically overlooks temporal information beyond the two frames being matched. In this paper, we investigate an alternative that treats object association as clip-wise matching. Our new perspective views a single long video sequence as multiple short clips, and tracking is then performed both within and between the clips. The benefits of this new approach are twofold. First, our method is robust to tracking error accumulation and propagation, as chunking the video allows interrupted frames to be bypassed, and short-clip tracking avoids the conventional error-prone long-term track memory management. Second, information from multiple frames is aggregated during clip-wise matching, resulting in more accurate long-range track association than the current frame-wise matching. Building on QDTrack, a state-of-the-art tracking-by-detection tracker, we showcase how tracking performance improves with our new formulation. We evaluate our proposals on two tracking benchmarks, TAO and MOT17, which have complementary characteristics and challenges.
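To make the clip-wise formulation concrete, here is a minimal Python sketch of the control flow, assuming hypothetical detector, embedder, and association components; none of these names or the clip length come from the paper or from QDTrack's released code.

```python
from typing import List

CLIP_LEN = 5  # assumed clip length; not specified in the abstract

def chunk(frames: List, clip_len: int = CLIP_LEN) -> List[List]:
    """Split a frame sequence into consecutive short clips."""
    return [frames[i:i + clip_len] for i in range(0, len(frames), clip_len)]

def track_video(frames, detector, embedder, intra_clip_tracker, inter_clip_matcher):
    """Clip-wise tracking: short-range association inside each clip, then
    long-range association between clip-level tracklets."""
    clip_tracklets = []
    for clip in chunk(frames):
        dets = [detector(f) for f in clip]                   # per-frame detections
        embs = [embedder(f, d) for f, d in zip(clip, dets)]  # appearance embeddings
        # Short clips need no long-term track memory management.
        clip_tracklets.append(intra_clip_tracker(dets, embs))

    # Tracklets carry features aggregated over all of their frames, so matching
    # between clips can bridge occlusions or abrupt changes inside a clip.
    tracks = clip_tracklets[0]
    for nxt in clip_tracklets[1:]:
        tracks = inter_clip_matcher(tracks, nxt)
    return tracks
```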
Scaling object taxonomies is one of the important steps toward robust real-world deployment of recognition systems. We have witnessed remarkable progress in images since the introduction of the LVIS benchmark. To continue this success in videos, a new video benchmark, TAO, was recently presented. Given the recent encouraging results from both the detection and tracking communities, we are interested in marrying those two advances and building a strong large vocabulary video tracker. However, supervision in LVIS and TAO is inherently sparse or even missing, posing two new challenges for training large vocabulary trackers. First, LVIS contains no tracking supervision, which leads to inconsistent learning of detection (with LVIS and TAO) and tracking (only with TAO). Second, the detection supervision in TAO is partial, which results in catastrophic forgetting of absent LVIS categories during video fine-tuning. To resolve these challenges, we present a simple but effective learning framework that takes full advantage of all available training data to learn detection and tracking without losing the ability to recognize any LVIS categories. With this new learning scheme, we show consistent improvements across various large vocabulary trackers, setting strong baseline results on the challenging TAO benchmarks.
Recently, memory-based approaches have shown promising results for semi-supervised video object segmentation. These methods predict object masks frame by frame with a frequently updated memory of previous masks. Different from this per-frame inference, we investigate an alternative perspective by treating video object segmentation as clip-wise mask propagation. In this per-clip inference scheme, we update the memory at an interval and simultaneously process a set of consecutive frames (i.e., a clip) between memory updates. The scheme provides two potential benefits: an accuracy gain through clip-level optimization and an efficiency gain through parallel computation over multiple frames. To this end, we propose a new method tailored to per-clip inference. Specifically, we first introduce a clip-wise operation to refine features based on intra-clip correlation. In addition, we employ a progressive matching mechanism for efficient information passing within a clip. With the synergy of the two modules and the newly proposed per-clip training, our network achieves state-of-the-art performance on YouTube-VOS 2018/2019 val (84.6% and 84.6%) and DAVIS 2016/2017 val (91.9% and 86.1%). Furthermore, our model shows a great speed-accuracy trade-off across different memory update intervals, leading to great flexibility.
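A minimal sketch of the per-clip inference loop described above, assuming hypothetical `encode_memory` and `segment_clip` components and an example memory-update interval of 5 frames; these specifics are our assumptions, not the paper's.

```python
def segment_video(frames, first_mask, encode_memory, segment_clip, clip_len=5):
    """Per-clip mask propagation: the memory is updated once per clip, and all
    frames of a clip are segmented jointly against the same memory."""
    memory = [encode_memory(frames[0], first_mask)]  # memory from the annotated frame
    masks = [first_mask]
    for start in range(1, len(frames), clip_len):
        clip = frames[start:start + clip_len]
        # Joint processing of the clip enables intra-clip refinement and
        # parallel computation over its frames.
        clip_masks = segment_clip(clip, memory)
        masks.extend(clip_masks)
        # Update the memory at the clip boundary rather than every frame.
        memory.append(encode_memory(clip[-1], clip_masks[-1]))
    return masks
```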
Recent studies have made significant progress in video matting by extending the success of trimap-based image matting to the video domain. In this paper, we push this task toward a more practical setting and propose the One-Trimap Video Matting network (OTVM), which performs video matting robustly using only a single user-annotated trimap. A key of OTVM is the joint modeling of trimap propagation and alpha prediction. Starting from baseline trimap propagation and alpha prediction networks, OTVM combines the two networks with an alpha-trimap refinement module to facilitate information flow. We also present an end-to-end training strategy to take full advantage of the joint model. Compared with previous decoupled methods, our joint modeling greatly improves the temporal stability of trimap propagation. We evaluate our model on two latest video matting benchmarks, Deep Video Matting and VideoMatting108, and outperform the state-of-the-art by significant margins (MSE improvements of 56.4% and 56.7%, respectively). The source code and model are available online: https://github.com/hongje/otvm.
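For illustration, a rough sketch of how trimap propagation and alpha prediction could be coupled per frame with an alpha-trimap refinement step; the function names and the exact feedback path are assumptions based only on the abstract, not the released OTVM code.

```python
def one_trimap_video_matting(frames, user_trimap, propagate_trimap, predict_alpha, refine):
    """Jointly propagate the trimap and predict alpha, frame by frame,
    starting from a single user-annotated trimap."""
    trimap = user_trimap
    alphas = []
    for frame in frames:
        trimap = propagate_trimap(frame, trimap)  # carry the trimap to this frame
        alpha = predict_alpha(frame, trimap)      # matte prediction from frame + trimap
        # Alpha-trimap refinement: feed the predicted alpha back to stabilize
        # the trimap before it is propagated to the next frame.
        trimap = refine(alpha, trimap)
        alphas.append(alpha)
    return alphas
```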
Machine learning is transforming the video editing industry. Recent advances in computer vision have upgraded video editing tasks such as intelligent reframing, rotoscoping, color grading, or applying digital makeup. However, most solutions have focused on video manipulation and VFX. This work introduces the Anatomy of Video Editing, a dataset and benchmark, to foster research in AI-assisted video editing. Our benchmark suite focuses on video editing tasks beyond visual effects, such as automatic footage organization and assisted video assembly. To enable research on these fronts, we annotate more than 1.5M tags over 196176 shots sampled from movie scenes. We establish competitive baseline methods and detailed analyses for each task. We hope our work sparks innovative research in the underexplored areas of AI-assisted video editing.
We introduce a novel paradigm for offline video instance segmentation (VIS), based on the hypothesis that explicit object-oriented information can be a strong clue for understanding the context of the entire sequence. To this end, we propose VITA, a simple structure built on top of an off-the-shelf Transformer-based image instance segmentation model. Specifically, we use an image object detector as a means of distilling object-specific contexts into object tokens. VITA accomplishes video-level understanding by associating frame-level object tokens without using spatio-temporal backbone features. By effectively building relationships between objects using the condensed information, VITA achieves the state-of-the-art on VIS benchmarks with a ResNet-50 backbone: 49.8 AP and 45.7 AP on YouTube-VIS 2019 and 2021, and 19.6 AP on OVIS. Moreover, because its object-token-based structure is decoupled from the backbone features, VITA shows several practical advantages unexplored by previous offline VIS methods - handling long and high-resolution videos with a common GPU, and freezing a frame-level detector trained on the image domain. Code will be available at https://github.com/sukjunhwang/vita.
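A speculative PyTorch sketch of associating frame-level object tokens with a lightweight Transformer, detached from dense spatio-temporal backbone features; the layer sizes, query count, and module layout are illustrative assumptions, not the released VITA architecture.

```python
import torch
import torch.nn as nn

class TokenAssociator(nn.Module):
    """Relate per-frame object tokens across time and read out video-level objects."""
    def __init__(self, dim=256, num_layers=3, num_video_queries=100):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=num_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=num_layers)
        self.video_queries = nn.Parameter(torch.randn(num_video_queries, dim))

    def forward(self, frame_tokens):
        # frame_tokens: (T, N, C) object tokens distilled by the image detector.
        t, n, c = frame_tokens.shape
        tokens = frame_tokens.reshape(1, t * n, c)   # flatten over time
        tokens = self.encoder(tokens)                # build relations across frames
        queries = self.video_queries.unsqueeze(0)    # (1, Q, C) video-level queries
        return self.decoder(queries, tokens)         # video-level object embeddings
```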
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation, which is effective in identifying feature-level algorithmic bias by exploiting conditional mutual information. Although several bias measurement methods have been proposed and widely studied to achieve algorithmic fairness in various tasks such as face recognition, their accuracy- or logit-based metrics are prone to trivial prediction score adjustment rather than fundamental bias reduction. Hence, we design a novel debiasing framework against algorithmic bias, which incorporates a bias regularization loss derived from the proposed information-theoretic bias measurement. In addition, we present a simple yet effective unsupervised debiasing technique based on stochastic label noise, which does not require explicit supervision of bias information. The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios through extensive experiments on multiple standard benchmarks.
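For reference, the standard definition of conditional mutual information that such a measure builds on is shown below; which variables play the roles of bias attribute, prediction, and conditioning variable is our illustrative assumption, not taken from the abstract.

$I(A;B \mid C) = \sum_{c} p(c) \sum_{a,b} p(a,b \mid c)\,\log \frac{p(a,b \mid c)}{p(a \mid c)\,p(b \mid c)}$

Intuitively, if $A$ is a bias attribute and $B$ a model prediction, a large $I(A;B \mid C)$ after conditioning on the relevant variable $C$ indicates residual feature-level dependence on the bias attribute rather than a mere shift in prediction scores.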
We propose the Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, our module sequentially infers attention maps along two separate dimensions, channel and spatial, and the attention maps are then multiplied with the input feature map for adaptive feature refinement. Because CBAM is a lightweight and general module, it can be integrated into any CNN architecture seamlessly with negligible overhead and is end-to-end trainable along with base CNNs. We validate CBAM through extensive experiments on the ImageNet-1K, MS COCO detection, and VOC 2007 detection datasets. Our experiments show consistent improvements in classification and detection performance with various models, demonstrating the wide applicability of CBAM. The code and models will be publicly available.
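A compact PyTorch sketch of the module as described above, with channel attention followed by spatial attention, each multiplied onto the feature map; the reduction ratio of 16 and the 7x7 spatial kernel are typical choices assumed here rather than stated in the abstract.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.channel(x)              # refine along the channel dimension
        x = x * self.spatial(x)              # then along the spatial dimension
        return x
```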
3D-aware image synthesis focuses on preserving spatial consistency in addition to generating high-resolution images with fine details. Recently, the Neural Radiance Field (NeRF) has been introduced for synthesizing novel views with low computational cost and superior performance. While several works investigate generative NeRFs and show remarkable achievements, they cannot handle conditional and continuous feature manipulation in the generation procedure. In this work, we introduce a novel model, called Class-Continuous Conditional Generative NeRF ($\text{C}^{3}$G-NeRF), which can synthesize conditionally manipulated photorealistic 3D-consistent images by projecting conditional features into the generator and the discriminator. The proposed $\text{C}^{3}$G-NeRF is evaluated on three image datasets: AFHQ, CelebA, and Cars. As a result, our model shows strong 3D consistency with fine details and smooth interpolation under conditional feature manipulation. For instance, $\text{C}^{3}$G-NeRF achieves a Fr\'echet Inception Distance (FID) of 7.64 in 3D-aware face image synthesis at a $\text{128}^{2}$ resolution. Additionally, we provide FIDs of generated 3D-aware images for each class of the datasets, as it is possible to synthesize class-conditional images with $\text{C}^{3}$G-NeRF.
Cellular automata (CA) captivate researchers due to the emergent, complex individualized behavior that simple global rules of interaction enact. Recent advances in the field have combined CA with convolutional neural networks to achieve self-regenerating images. This new branch of CA is called neural cellular automata [1]. The goal of this project is to use the idea of neural cellular automata to grow prediction machines. We place many different convolutional neural networks in a grid. Each conv net cell outputs a prediction of what the next state will be and minimizes its predictive error. Cells receive their neighbors' colors and fitnesses as input, where each cell's fitness score describes how accurate its predictions are. Cells can also move to explore their environment, and some stochasticity is applied to movement.
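An illustrative sketch of a single cell's predict-and-learn step under this setup; the channel layout (neighbor colors plus a fitness channel), the loss, and the fitness definition are our own assumptions for illustration, not details from the project.

```python
import torch.nn as nn

class CellPredictor(nn.Module):
    """A tiny conv net living in one grid cell; predicts the next local state."""
    def __init__(self, in_channels=4, hidden=16, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, out_channels, 3, padding=1),
        )

    def forward(self, neighborhood):
        # neighborhood: (1, in_channels, 3, 3) = neighbors' colors + a fitness channel
        return self.net(neighborhood)

def step(cell, neighborhood, next_state, optimizer):
    """One prediction/learning step: predict the next state and minimize error."""
    pred = cell(neighborhood)
    loss = nn.functional.mse_loss(pred, next_state)  # predictive error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    fitness = -loss.item()                           # higher fitness = lower error
    return pred.detach(), fitness
```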